Generative AI’s cybersecurity potential is clear, but so far it’s given hackers the upper hand
Generative AI has opened up a new frontier in the ongoing cyber arms race between the security community and cyber criminals, and a leading security researcher has warned attackers may have more to gain from the technology.
Charl van der Walt, head of security research at Orange Cyberdefense, argued that while it is still too early to draw definitive conclusions about the security implications of the technology, he believes generative AI may prove more valuable to attackers in these early years.
Van der Walt made these claims while launching Orange Cyberdefense’s annual Security Navigator 2025 threat intelligence report, which collates the various offensive and defensive capabilities generative AI has to offer.
The report concluded that there is a “strong argument” that generative AI tools will have an “asymmetric impact on security, strongly favoring the offensive side.”
Although much has been made of generative AI’s ability to help attackers generate realistic assets for social engineering attacks, such as phishing pages or messages, van der Walt argued that LLMs will mainly help attackers by supercharging the volume of attacks they can levy against targets.
“The ability of a GPT, a content creator, that is spoken about the most is that you can generate more believable content. I’m not convinced that more believable content is really what the criminal cares about, it’s volume, I think volume is the key factor,” he said.
In fact, other than generating content for social engineering attacks, van der Walt said his research indicates more novel offensive applications of generative AI such as malicious LLMs haven’t meaningfully progressed the threat landscape as yet.
“I haven’t seen anything and have no sense that anything else is being done using AI that couldn’t have been done before … our assessment is that as a tool in the hands of the adversary, at this point we don’t see anything that fundamentally changes the threat landscape.”
Defenders’ ‘data advantage’ won’t have much bearing on generative AI arms race
Although some security leaders have made lofty claims about how the cyber defenders’ “data advantage” will give them the upper hand when it comes to detecting and remediating threats, van der Walt argued this doesn’t quite apply to AI’s more recent generative shift.
Speaking to ITPro, van der Walt said this data advantage is more powerful when it comes to honing the predictive power of traditional forms of AI, which have been around for many years already.
“I don’t think that the defender necessarily and automatically benefits immediately from having Gen AI, and if we were going to have benefited from having machine learning in general that would have already happened.”
He was quick to caveat this by saying the data advantage has been a significant factor in the cyber arms race, but argued this came in the form of threat detection with tools like endpoint detection and response (EDR) and intrusion detection systems (IDS).
However, he said it’s harder to see this transformative potential with generative AI.
For example, when posed with the potential use of retrieval augmented generation (RAG) to detect AI-generated social engineering content, van der Walt said the fundamental imbalance between attackers and defenders will remain unchanged.
“I think that will be useful but again, in the tension between the attacker and the defender, the asymmetry favors the attacker. Now the defender has to identify every single fake thing that comes in, and then the attacker only has to get one fake thing through. So if they’re both accelerated I think the attacker is going to benefit more.”